Create two apps running locally, a frontend app and a backend app. The frontend app should
make an API request to the backend app at its teleport public_addr
You can use this example app if you don't have a frontend/backend setup
```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// Change to your cluster address.
const clusterName = "avatus.sh"

func main() {
	// Handler for the HTML page. This is the "client".
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		page := fmt.Sprintf(html, clusterName)
		w.Header().Set("Content-Type", "text/html")
		w.Write([]byte(page))
	})

	// Handler for the API endpoint.
	http.HandleFunc("/api/data", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Access-Control-Allow-Origin", fmt.Sprintf("https://client.%s", clusterName))
		w.Header().Set("Access-Control-Allow-Credentials", "true")
		data := map[string]string{"hello": "world"}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(data)
	})

	log.Println("Server starting on http://localhost:8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}

const html = `<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>API Data Fetcher</title>
</head>
<body>
  <div id="result"></div>
  <div id="cors-result"></div>
  <script>
    fetch('https://api.%s/api/data', { credentials: 'include' })
      .then(response => response.json())
      .then(data => {
        document.getElementById('result').textContent = JSON.stringify(data);
      })
      .catch(error => console.error('Error:', error));
  </script>
</body>
</html>`
```
Update your app service to serve the apps like this (update your public addr to what makes sense for your cluster)
Create a role with limited permissions allow-roles-and-nodes. This role allows you to see the Role screen and ssh into all nodes.
```yaml
kind: role
metadata:
  name: allow-roles-and-nodes
spec:
  allow:
    logins:
    - root
    node_labels:
      '*': '*'
    rules:
    - resources:
      - role
      verbs:
      - list
      - read
  options:
    max_session_ttl: 8h0m0s
version: v5
```
Create another role with limited permissions allow-users-with-short-ttl. This role's session expires in 4 minutes, allows you to see the Users screen, and denies access to all nodes.
```yaml
kind: role
metadata:
  name: allow-users-with-short-ttl
spec:
  allow:
    rules:
    - resources:
      - user
      verbs:
      - list
      - read
  deny:
    node_labels:
      '*': '*'
  options:
    max_session_ttl: 4m0s
version: v5
```
Create a user that has no access to anything but allows you to request roles:
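If you need a starting point, a minimal sketch could look like the following. The names `requester` and `requester-user` are placeholders; the request role simply lists the two roles created above:

```yaml
kind: role
metadata:
  name: requester
spec:
  allow:
    request:
      roles:
      - allow-roles-and-nodes
      - allow-users-with-short-ttl
version: v5
---
kind: user
metadata:
  name: requester-user
spec:
  roles:
  - requester
version: v2
```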
Verify that assuming allow-roles-and-nodes allows you to see roles screen and ssh into nodes
After assuming allow-roles-and-nodes, verify that assuming allow-users-with-short-ttl allows you to see users screen, and denies access to nodes
Verify a switchback banner is rendered with roles assumed, and count down of when it expires
Verify that you can access nodes after Drop Request on allow-users-with-short-ttl while allow-roles-and-nodes is still assumed
Verify after re-assuming allow-users-with-short-ttl role that the next action (i.e. opening a new tab with unified resources) triggers a relogin modal after the expiry is met (4 minutes)
```yaml
kind: role
metadata:
  name: waiting-room
spec:
  allow:
    request:
      roles:
      - <some other role to assign user after approval>
  options:
    max_session_ttl: 8h0m0s
    request_access: reason
    request_prompt: <some custom prompt to show in reason dialogue>
version: v3
```
Verify after login, reason dialogue is rendered with prompt set to request_prompt setting
Verify after clicking send request, pending dialogue renders
Verify after approving a request, dashboard is rendered
Verify the correct role was assigned
Strategy Always
With the previous role you created from Strategy Reason, change request_access to always:
Verify after login, pending dialogue is auto rendered
Verify after approving a request, dashboard is rendered
Verify after denying a request, access denied dialogue is rendered
Verify a switchback banner is rendered with roles assumed, and count down of when it expires
Verify switchback button says Logout and clicking goes back to the login screen
Strategy Optional
With the previous role you created from Strategy Reason, change request_access to optional:
Verify after login, dashboard is rendered as normal
For these, you might want to use clusters that are deployed on the web, specified in
parens, or set up the connectors on a local enterprise cluster following the guide from
our wiki.
Verify that the shell is pinned to the correct cluster (for root clusters and leaf
clusters).
That is, opening new shell sessions in other workspaces or other clusters within the same
workspace should have no impact on the original shell session.
Verify that the local shell is opened with correct env vars.
TELEPORT_PROXY and TELEPORT_CLUSTER should pin the session to the correct cluster.
TELEPORT_HOME should point to ~/Library/Application Support/Teleport Connect/tsh.
PATH should include /Applications/Teleport Connect.app/Contents/Resources/bin.
Verify that the working directory in the tab title is updated when you change the directory
(only for local terminals).
Verify that terminal resize works for both local and remote shells.
Install midnight commander on the node you ssh into: $ sudo apt-get install mc
Run the program: $ mc
Resize Teleport Connect to see if the panels resize with it
Verify that the tab automatically closes on $ exit command.
Open a new kubernetes tab, run echo $KUBECONFIG and check if it points to the file within Connect's app data directory.
Close the tab and open it again (to the same resource). Verify that the kubeconfig path didn't change.
Run kubectl get pods -A and verify that the command succeeds. Then create a pod with kubectl apply -f https://k8s.io/examples/application/shell-demo.yaml and exec into it with kubectl exec --stdin --tty shell-demo -- /bin/bash. Verify that the shell works.
For execing into a pod, you might need to create a ClusterRoleBinding in
k8s
for the admin role.
Then you need to add the k8s group (which maps to the k8s admin role in ClusterRoleBinding) to kubernetes_groups of your Teleport role.
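For example, a hedged sketch of such a binding and the matching Teleport role fragment. The group name `kube-admins` is an assumption; use whatever group your cluster maps:

```yaml
# Kubernetes: bind the built-in cluster-admin ClusterRole to a group.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-admins-binding
subjects:
- kind: Group
  name: kube-admins            # placeholder group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
# Teleport role fragment: map the user to that Kubernetes group.
spec:
  allow:
    kubernetes_groups:
    - kube-admins
```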
Repeat the above check for a k8s cluster connected to a leaf cluster.
Verify that the kubeconfig file is removed when the user:
Verify that reopening the app after removing ~/Library/Application Support/Teleport Connect/tsh doesn't crash the app.
Verify that reopening the app after removing ~/Library/Application Support/Teleport Connect/app_state.json but not the tsh dir doesn't crash the app.
Verify that logging out of a cluster and then logging in to the same cluster doesn't
remember previous tabs (they should be cleared on logout).
Open a db connection tab. Change the db name and port. Close the tab. Restart the app. Open
connection tracker and choose said db connection from it. Verify that the newly opened tab uses
the same db name and port.
Log in to a cluster. Close the DocumentCluster tab. Open a new DocumentCluster tab. Restart
the app. Verify that the app doesn't ask you about restoring previous tabs.
Verify that logging in to a new cluster adds it to the identity switcher and switches to
the workspace of that cluster automatically.
Verify that the state of the current workspace is preserved when you change the workspace
(by switching to another cluster) and return to the previous workspace.
Click "Add another cluster", provide an address to a cluster that was already added. Verify
that Connect simply changes the workspace to that of that cluster.
Click "Add another cluster", provide an address to a new cluster and submit the form. Close
the modal when asked for credentials. Verify that the cluster was still added and is visible in
the profile selector.
Verify that you can connect to all three resources types on root clusters and leaf
clusters.
Verify that picking a resource filter and a cluster filter at the same time works as
expected.
Verify that connecting to a resource from a different root cluster switches to the
workspace of that root cluster.
Shut down a root cluster.
Verify that attempting to search returns "Some of the search results are incomplete" in
the search bar.
Verify that clicking "Show details" next to the error message and then closing the modal
by clicking one of the buttons or by pressing Escape does not close the search bar.
Log in as a user with a short TTL. Make sure you're not logged in to any other cluster. Wait for
the cert to expire. Enter a search term that usually returns some results.
Relogin when asked. Verify that the search bar is not collapsed and shows search
results.
Close the login modal instead of logging in. Verify that the search bar is not collapsed
and shows "No matching results found".
Resilience when resources become unavailable @gzdunek
DocumentCluster
For each scenario, create at least one DocumentCluster tab for each available resource kind.
For each scenario, first do the action described in the bullet point, then refetch the list of
resources by entering the search field and pressing Enter. Verify that no unrecoverable
error was raised (that is, the app still works). Then restart the app and verify that it was
restarted gracefully (no unrecoverable error on restart, the user can continue using the
app).
Stop the root cluster.
Stop a leaf cluster.
Disconnect your device from the internet.
DocumentGateway
Verify that you can't open more than one tab for the same db server + username pair.
Trying to open a second tab with the same pair should just switch you to the already
existing tab.
Create a db connection tab for a given database. Then remove access to that db for that
user. Go back to Connect and change the database name and port. Both actions should not
return an error.
Open DocumentCluster and make sure a given db is visible on the list of available dbs.
Click "Connect" to show a list of db users. Now remove access to that db. Go back to Connect
and choose a username. Verify that a recoverable error is shown and the user can continue
using the app.
Create a db connection, close the app, run tsh proxy db with the same port, start the
app. Verify that the app doesn't crash and the db connection tab shows you the error
(address in use) and offers a way to retry creating the connection.
To test scenarios from this section, create a user with a role that has TTL of 1m
(spec.options.max_session_ttl).
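A minimal sketch of such a role (the name `short-ttl` is arbitrary):

```yaml
kind: role
metadata:
  name: short-ttl
spec:
  allow:
    logins:
    - root
    node_labels:
      '*': '*'
  options:
    max_session_ttl: 1m
version: v5
```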
Log in, create a db connection and run the CLI command; wait for the cert to expire, make
another connection to the local db proxy.
Verify that the window received focus and a modal login is shown.
Verify that this works on:
macOS
Windows
Linux
Verify that after successfully logging in:
The cluster info is synced.
The first connection wasn't dropped; try executing select now();, the client should
be able to automatically reinstantiate the connection.
The database proxy is able to handle new connections; click "Run" in the db tab and
see if it connects without problems. You might need to resync the cluster again in case
they managed to expire.
Verify that closing the login modal without logging in shows an appropriate error.
Log in, create a db connection, then remove access to that db server for that user; wait for
the cert to expire, then attempt to make a connection through the proxy; log in.
Verify that psql shows an appropriate access denied error ("access to db denied. User
does not have permissions. Confirm database user and name").
Log in, open a cluster tab, wait for the cert to expire. Switch from a servers view to
databases view.
Verify that a login modal was shown.
Verify that after logging in, the database list is shown.
Log in, set up two db connections. Wait for the cert to expire. Attempt to connect to the first
proxy, then without logging in proceed to connect to the second proxy.
Verify that an error notification was shown related to another login attempt being in
progress.
To set up a test environment, follow the steps laid out in Creating Access Requests (Role Based) from the Web UI testplan and then verify the tasks below.
Verify that under requestable roles, only allow-roles-and-nodes and allow-users-with-short-ttl are listed
Verify you can select/input/modify reviewers
Verify you can view the request you created from request list (should be in a pending
state)
Verify there is a list of reviewers you selected (an empty list if none were selected AND
suggested_reviewers wasn't defined)
Verify you can't review own requests
Creating Access Requests (Search Based)
To set up a test environment, follow the steps laid out in Creating Access Requests (Resource Based) from the Web UI testplan and then verify the tasks below.
Verify that a user can see resources based on the searcheable-resources rules
Verify you can select/input/modify reviewers
Verify you can view the request you created from request list (should be in a pending
state)
Verify there is a list of reviewers you selected (an empty list if none were selected AND
suggested_reviewers wasn't defined)
Verify you can't review own requests
Verify that you can mix adding resources from the root and leaf clusters.
Verify that you can't mix roles and resources into the same request.
Verify that you can request resources from both the unified view and the search bar.
Viewing & Approving/Denying Requests
To set up a test environment, follow the steps laid out in Viewing & Approving/Denying Requests from the Web UI testplan and then verify the tasks below.
Verify you can view access request from request list
Verify you can approve a request with message, and immediately see updated state with
your review stamp (green checkmark) and message box
Verify you can deny a request, and immediately see updated state with your review stamp
(red cross)
Verify deleting the denied request is removed from list
Assuming Approved Requests (Role Based)
Verify that assuming allow-roles-and-nodes allows you to see roles screen and ssh into
nodes
After assuming allow-roles-and-nodes, verify that assuming allow-users-with-short-ttl
allows you to see users screen, and denies access to nodes
Verify a switchback banner is rendered with roles assumed, and count down of when it
expires
Verify switching back goes back to your default static role
Verify after re-assuming allow-users-with-short-ttl role, the user is automatically logged
out after the expiry is met (4 minutes)
Assuming Approved Requests (Search Based)
Verify that assuming an approved request allows you to see the resources you've requested.
Assuming Approved Requests (Both)
Verify assume buttons are only present for approved requests and for the logged-in user
Verify that after clicking on the assume button, it is disabled in both the list and in
viewing
Verify that after re-login, requests that are not expired and are approved are assumable
again
Headless auth modal in Connect can be triggered by calling tsh ls --headless --user=<username> --proxy=<proxy>. The cluster needs to have webauthn enabled for it to work.
Verify the basic operations (approve, reject, ignore then accept in the Web UI).
Make a headless request then cancel the command. Verify that the modal in Connect was
closed automatically.
Make a headless request then accept it in the Web UI. Verify that the modal in Connect was
closed automatically.
Make two concurrent headless requests for the same cluster. Verify that Connect shows the
second one after closing the modal for the first request.
Make two concurrent headless requests for two different clusters. Verify that Connect shows
the second one after closing the modal for the first request.
You will need a YubiKey 4.3+ and Teleport Enterprise.
The easiest way to test it is to enable cluster-wide hardware keys enforcement
(set require_session_mfa: hardware_key_touch_and_pin to get both touch and PIN prompts).
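For reference, a minimal sketch of the cluster-wide setting, assuming a self-hosted cluster using a static config file:

```yaml
auth_service:
  authentication:
    type: local
    second_factors: ["webauthn"]
    require_session_mfa: hardware_key_touch_and_pin
```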
Log in. Verify that you were asked for both PIN and touch.
Connect to a database. Verify you were prompted for touch (a PIN prompt can appear too).
Change the PIN (leave the PIV PIN field empty during login to access this flow).
Close the app, disconnect the YubiKey, then reopen the app. Verify the app shows an error about the missing key.
Verify the happy path from clean slate (no existing role) setup: set up the node and then
connect to it.
Kill the agent while it's joining the cluster and verify that the logs from the agent process
are shown in the UI.
The easiest way to do this is to follow the agent cleanup daemon logs (tail -F ~/Library/Application\ Support/Teleport\ Connect/logs/cleanup.log) and then run kill -s KILL <agent PID>.
Afterwards, in the status view, verify that the page says that the process exited with
SIGKILL.
Open the node config, change the proxy address to an incorrect one to simulate problems
with connection. Verify that the app kills the agent after the agent is not able to join the
cluster within the timeout.
Verify autostart behavior. The agent should automatically start on app start unless it was
manually stopped before exiting the app.
VNet doesn't work with local clusters made available under custom domains through entries in /etc/hosts. It's best to use a "real" cluster. nip.io might work, but it hasn't been confirmed
yet.
Verify that Connect asks for relogin when attempting to connect to an app after cert expires.
Be mindful that you need to connect to the app at least once before the cert expires for
Connect to properly recognize it as a TCP app.
Start VNet, then stop it.
Verify that the VNet panel doesn't show any errors related to VNet being stopped.
Start VNet. While it's running, kill the admin process.
The easiest way to find the PID of the admin process is to open Activity Monitor, View →
All Processes, Hierarchically, search for tsh and find tsh running under kernel_task →
launchd → tsh, owned by root. Then just sudo kill -s KILL <tsh pid>.
Verify that the admin process leaves files in /etc/resolver. Then verify that starting
VNet again, connecting to a TCP app, and shutting VNet down results in those files being
cleaned up.
Start VNet in a clean macOS VM. Verify that on the first VNet start, macOS shows the prompt
for enabling the background item for tsh.app. Accept it and verify that you can connect to a TCP
app through VNet. @ravicious
Verify that logs are collected for all processes (main, renderer, shared, tshd) under ~/Library/Application\ Support/Teleport\ Connect/logs.
Verify that the password from the login form is not saved in the renderer log.
Log in to a cluster, then log out and log in again as a different user. Verify that the app
works properly after that.
Clean the Application Support dir for Connect. Start the latest stable version of the app.
Open every possible document. Close the app. Start the current alpha. Reopen the tabs. Verify that
the app was able to reopen the tabs without any errors.
Web UI
Main
For main, test with a role that has access to all resources.
As you go through testing, click on any links you come across to make sure they work (no 404) and are up to date.
Trusted Cluster (leafs) (@rudream)
The following features should allow users to view resources in trusted clusters.
There should be a cluster dropdown for:
/web/cluster/<cluster-name>/console/nodes
Navigation (@rudream)
Verify the navigation contains: Resources (unified resources), Access Management, Access Requests, Active Sessions, Notification Bell, and the user settings menu.
User Settings Menu @bl-nero
Unified Resources
- Verify the Add Resource button correctly sends to the resource discovery page.
- Set forward_agent: true under the options section of your role, and then test that your teleport certs show up when you run ssh-add -l on the node.
- Verify the Launch button for applications correctly sends to the app.
- Verify the Launch button for AWS apps correctly renders an IAM role selection window.
- Verify Connect renders the dialog with correct information.
- Verify Connect renders the dialog with correct information.
- Verify Connect renders a login selection and that the logins are completely in view.
Active Sessions (@rudream)
Access Management Side Nav (@rudream)
Session Recordings (@rudream)
Audit log (@rudream)
- Verify the details button.
Users (@rudream)
All actions should require re-authn with a webauthn device.
Invite, Reset, and Login Forms (@rudream)
For each, test the invite, reset, and login flows:
- second_factors set to ["otp"], requires otp
- second_factors set to ["webauthn"], requires hardware key
- second_factors set to ["webauthn", "otp"], requires an MFA device
Auth Connectors
For help with setting up auth connectors, check out the [Quick GitHub/SAML/OIDC Setup Tips]
All actions should require re-authn with a webauthn device.
Roles (@rudream)
All actions should require re-authn with a webauthn device.
Enroll New Integration (aka Plugins) @kimlisa
- Verify that the self-hosted plugins and machine id cards link out to the correct docs.
- Verify that no-code integrations renders a form.
Okta Integration @kiosion
Enroll new resources using Discover Wizard @kimlisa
Use Discover Wizard to enroll new resources and access them:
Access Lists @kiosion
Note: there seem to be two Access Lists sections; check the other section out first.
Check it's not available for OSS
Admin refers to users with access_list RBAC defined:
Session & Identity Locks
- Verify locks with an empty Message are shown with this field as empty.
- Verify locks with an empty Expiration field are shown with this field as "Never".
- Verify lock targets appear in the Locked Items column.
- Verify the Add Target button turns into a Remove button.
Trusted Devices @kimlisa
Managed Clusters (@rudream)
- Verify the root cluster is marked with a root pill.
Application Access @kimlisa
Required Applications
Launch your cluster and make sure you are logged out of your api by going to
https://api.avatus.sh/teleport-logout
- Verify the page renders the {"hello":"world"} response.
Access Requests
Not available for OSS
Access Request Notification Routing Rule (cloud only) @kimlisa
Is this cloud only now? Check with Bernard.
Creating Access Requests (Role Based) @kiosion
- Create a role with limited permissions allow-roles-and-nodes. This role allows you to see the Role screen and ssh into all nodes.
- Create another role with limited permissions allow-users-with-short-ttl. This role's session expires in 4 minutes, allows you to see the Users screen, and denies access to all nodes.
- Create a user that has no access to anything but allows you to request roles.
- Verify that under requestable roles, only allow-roles-and-nodes and allow-users-with-short-ttl are listed.
Creating Access Requests (Resource Based) @kiosion
- Create a role with access to searchable resources (apps, db, kubes, nodes, desktops). The template searcheable-resources is below.
- Create a user that has no access to resources, but allows you to search them.
- Verify that a user can see resources based on the searcheable-resources rules.
Viewing & Approving/Denying Requests @kiosion
- Create a user with the role reviewer that allows you to review all requests, and delete them.
Assuming Approved Requests (Role Based) @kiosion
- Verify that assuming allow-roles-and-nodes allows you to see the roles screen and ssh into nodes.
- After assuming allow-roles-and-nodes, verify that assuming allow-users-with-short-ttl allows you to see the users screen, and denies access to nodes.
- Verify that you can access nodes after Drop Request on allow-users-with-short-ttl while allow-roles-and-nodes is still assumed.
- Verify after re-assuming the allow-users-with-short-ttl role that the next action (i.e. opening a new tab with unified resources) triggers a relogin modal after the expiry is met (4 minutes).
Assuming Approved Requests (Search Based) @kiosion
Assuming Approved Requests (Both) @kiosion
Access Request Waiting Room @kiosion
Strategy Reason
Create the following role:
- Verify after login, the reason dialogue is rendered with the prompt set to the request_prompt setting.
- Verify after clicking send request, the pending dialogue renders.
Strategy Always
With the previous role you created from Strategy Reason, change request_access to always:
- Verify the switchback button says Logout and clicking it goes back to the login screen.
Strategy Optional
With the previous role you created from Strategy Reason, change request_access to optional:
:Access Lists @kiosion
Not available for OSS
Web Terminal (aka console)
ctrl+[1...9]
(alt on linux/windows)require_session_mfa
and:Terminal Node List Tab
Terminal Session Tab
- Install Midnight Commander on the node you ssh into: $ sudo apt-get install mc
- Run the program: $ mc
Cloud (@rudream)
From your cloud staging account, change the field teleportVersion to the test version.
Dashboard Tenants (self-hosted license)
Recovery Code Management
Invite/Reset
Recovery Flow: Add new mfa device
Recovery Flow: Change password
Recovery Email
RBAC (@rudream)
Create a role with no allow.rules defined:
- Verify that the user does not have the Access top-level navigation item.
- Verify that the Audit top-level navigation item only contains Active Sessions.
- Verify that the user does not have the Policy top-level navigation item, while the admin does.
- Verify that the Identity top-level navigation item only contains Access Requests and Access Lists.
- Verify that the Add New top-level navigation item only contains Resource and Access List.
- Verify that the user does not have the Identity top-level navigation item.
- Verify that the Add New top-level navigation item only contains Resource.
- Verify that the Enroll New Resource button is disabled on the Resources screen.
Note: User has read/create access_request access to their own requests, despite resource settings.
Add the following under spec.allow.rules to enable read access to the audit log:
- Verify that the Audit Log is accessible.
Add the following to enable list access to session recordings:
- Verify that Session Recordings are accessible.
Change the session permissions to enable read access to recorded sessions:
Add the following to enable read access to the roles:
Add the following to enable read access to the auth connectors:
Add the following to enable read access to users:
Add the following to enable read access to trusted clusters:
Teleport Connect
Auth methods @gzdunek
Test the auth methods below (auth_service.authentication in the cluster config):
- type: local, second_factors: ["otp"]
- type: local, second_factors: ["webauthn"]
- type: local, second_factors: ["webauthn"], log in passwordlessly with hardware key
- type: local, second_factors: ["webauthn"], log in passwordlessly with touch ID
- type: local, second_factors: ["webauthn", "otp"], log in with OTP
- type: local, second_factors: ["webauthn", "otp"], log in with hardware key
- type: local, second_factors: ["webauthn", "otp"], log in with passwordless auth
, log in with passwordless authcapabilities to multiple users.
parens. Or set up the connectors on a local enterprise cluster following the guide from
our wiki.
Shell @ravicious
Kubernetes access @gzdunek
Desktop access (find a desktop on asteroid.earth cluster or set it up manually
https://goteleport.com/docs/enroll-resources/desktop-access/getting-started/) @gzdunek
State restoration from disk @ravicious
Connections picker @ravicious
Cluster resources @gzdunek
- Verify spec.allow.logins and spec.allow.db_users.
Tabs @ravicious
Shortcuts @gzdunek
- Verify switching tabs with Cmd+[1...9].
Workspaces & cluster management @ravicious
Search bar @ravicious
File transfer @ravicious
Refreshing certs @ravicious
Access Requests @ravicious
Configuration @gzdunek
- Verify that ⋮ > Open Config File opens the app_config.json file in your editor.
- Verify that a changed config property is applied (e.g. terminal.fontFamily).
- Verify that an invalid shortcut is handled gracefully (e.g. "keymap.tab1": "ABC").
- Verify that a value of the wrong type is handled gracefully (e.g. "keymap.tab1": not a string).
).Headless auth @gzdunek
Per-session MFA @ravicious
- Use the kubectl exec --stdin --tty shell-demo -- /bin/bash command mentioned above to verify that Kube access is working with MFA.
Hardware key support @ravicious
to get both touch and PIN prompts).Connect My Computer @ravicious
VNet @gzdunek
Misc @gzdunek